Interdisciplinary Centre for Security, Reliability and Trust
Abstract: Very High Throughput satellites typically provide multibeam coverage; however, a common problem is a mismatch between the capacity of each beam and its traffic demand: some beams fall short of the requirements while others exceed them. This challenge can be addressed by integrating machine learning with flexible payload and adaptive beamforming techniques, which allow payload resources to be allocated dynamically based on real-time capacity needs. As artificial intelligence advances, its ability to automate tasks, enhance efficiency, and increase precision is proving invaluable, especially in satellite communications, where traditional optimization methods are often computationally intensive. AI-driven solutions offer faster, more effective ways to handle complex satellite communication tasks. Artificial intelligence in space, however, faces tighter constraints than in other domains, given radiation effects and the spacecraft's limits on power, mass, and area. Current onboard processing relies on legacy space-certified general-purpose processors, costly application-specific integrated circuits, or field-programmable gate arrays subject to a highly stringent certification process. The increased performance demanded of onboard processors to satisfy accelerating data rates and autonomy requirements has rendered current space-grade processors obsolete. This work focuses on transforming the satellite payload using artificial intelligence and machine learning methodologies running on available commercial off-the-shelf chips for onboard processing. The objectives include validating artificial intelligence-driven scenarios, focusing on flexible payload and adaptive beamforming implemented as onboard machine learning models. Results show that the machine learning models significantly improve signal quality, spectral efficiency, and throughput compared to a conventional payload.
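As a concrete illustration of the flexible-payload idea, the sketch below maps per-beam traffic demand to a power allocation with a small neural network. This is a minimal, hypothetical example rather than the paper's onboard model: the network shape, the beam count `N_BEAMS`, and the budget `P_TOTAL` are all assumptions made here for illustration.

```python
# Hypothetical sketch (not the paper's model): a feedforward network that maps
# per-beam traffic demand to a power allocation summing to the payload budget.
import torch
import torch.nn as nn

N_BEAMS = 16          # hypothetical number of beams
P_TOTAL = 100.0       # hypothetical total payload power budget (W)

class FlexiblePayloadNet(nn.Module):
    def __init__(self, n_beams: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_beams, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_beams),
        )

    def forward(self, demand: torch.Tensor) -> torch.Tensor:
        # Softmax enforces the total-power constraint by construction.
        return P_TOTAL * torch.softmax(self.body(demand), dim=-1)

model = FlexiblePayloadNet(N_BEAMS)
demand = torch.rand(1, N_BEAMS)   # normalized per-beam traffic demand
power = model(demand)             # per-beam power, sums to P_TOTAL
print(power.sum().item())         # ~100.0
```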
Abstract: Non-diagonal reconfigurable intelligent surfaces (RIS) offer enhanced wireless signal manipulation over conventional RIS by enabling the signal incident on any of its $M$ elements to be reflected from another element via an $M \times M$ switch array. To fully exploit this flexible configuration, the acquisition of individual channel state information (CSI) is essential. However, because the RIS is passive and lacks signal processing capabilities, only cascaded channel estimation can be performed. This entails estimating the CSI for all $M \times M$ switch-array permutations, a total of $M!$ possible configurations, to identify the one that maximizes the channel gain. This process leads to long uplink training intervals, which degrade spectral efficiency and increase uplink energy consumption. In this paper, we propose a low-complexity channel estimation protocol that avoids the exhaustive search over $M!$ permutations by utilizing only three configurations to optimize the non-diagonal RIS switch array and beamforming for single-input single-output (SISO) and multiple-input single-output (MISO) systems. Specifically, our three-stage pilot-based protocol estimates scaled versions of the user-RIS and RIS-base-station (BS) channels in the first two stages using the least squares (LS) estimator and the commonly used ON/OFF protocol from conventional RIS. In the third stage, the cascaded user-RIS-BS channels are estimated to enable efficient beamforming optimization. Complexity analysis shows that the proposed protocol significantly reduces the BS computational load from $\mathcal{O}(NM \times M!)$ to $\mathcal{O}(NM)$, where $N$ is the number of BS antennas, which is comparable to conventional ON/OFF-based LS estimation for a conventional diagonal RIS.
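The first two stages build on the standard ON/OFF least-squares estimator; the sketch below illustrates that building block for a cascaded channel, activating one RIS element per pilot slot. All dimensions and channel realizations are synthetic stand-ins, not the paper's setup.

```python
# Minimal sketch of ON/OFF least-squares cascaded channel estimation
# (the building block of stages 1-2); dimensions and channels are synthetic.
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 4                       # BS antennas, RIS elements
g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)            # user -> RIS
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)  # RIS -> BS
x = 1.0 + 0j                      # known pilot symbol

# ON/OFF protocol: activate one element per pilot slot, all others off.
V_hat = np.zeros((N, M), dtype=complex)       # cascaded channel estimates
for m in range(M):
    noise = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    y = H[:, m] * g[m] * x + noise            # received pilot with element m ON
    V_hat[:, m] = y / x                       # per-slot LS estimate of H[:, m] * g[m]

print(np.linalg.norm(V_hat - H * g) / np.linalg.norm(H * g))  # small relative error
```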
Abstract: This study introduces ResNet-GLUSE, a lightweight ResNet variant enhanced with Gated Linear Unit-enhanced Squeeze-and-Excitation (GLUSE), an adaptive channel-wise attention mechanism. By integrating dynamic gating into the traditional SE framework, GLUSE improves feature recalibration while maintaining computational efficiency. Experiments on the EuroSAT and PatternNet datasets confirm its effectiveness, achieving accuracies exceeding \textbf{94\%} and \textbf{98\%}, respectively. While \textbf{MobileViT achieves 99\% accuracy}, ResNet-GLUSE offers \textbf{33x fewer parameters, 27x fewer FLOPs, 33x smaller model size (MB), $\approx$6x lower power consumption (W), and $\approx$3x faster inference time (s)}, making it significantly more efficient for onboard satellite deployment. Furthermore, due to its simplicity, ResNet-GLUSE can be easily mimicked for \textbf{neuromorphic computing}, enabling ultra-low-power inference at just \textbf{852.30 mW} on the BrainChip Akida. This balance between high accuracy and ultra-low resource consumption establishes ResNet-GLUSE as a practical solution for real-time Earth Observation (EO) tasks. Reproducible code is available in our shared repository.
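Since the reference implementation lives in the authors' shared repository, the following is only a hedged PyTorch sketch of what a GLUSE-style block could look like: squeeze-and-excitation whose excitation path is gated by a gated linear unit. The layer sizes, reduction ratio, and gating placement are assumptions, not the paper's exact design.

```python
# Hedged sketch of a GLUSE-style block: squeeze-and-excitation whose
# excitation path is gated by a GLU. The exact sizes/gating in ResNet-GLUSE
# may differ; see the authors' repository for the reference code.
import torch
import torch.nn as nn

class GLUSE(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = channels // reduction
        self.squeeze = nn.AdaptiveAvgPool2d(1)        # global context per channel
        self.fc_value = nn.Linear(channels, hidden)   # GLU value path
        self.fc_gate = nn.Linear(channels, hidden)    # GLU gate path
        self.expand = nn.Linear(hidden, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.squeeze(x).view(b, c)
        # Gated linear unit: value * sigmoid(gate), then expand back to C channels.
        e = self.fc_value(s) * torch.sigmoid(self.fc_gate(s))
        w = torch.sigmoid(self.expand(e)).view(b, c, 1, 1)
        return x * w                                  # channel-wise recalibration

feat = torch.randn(2, 64, 32, 32)
print(GLUSE(64)(feat).shape)    # torch.Size([2, 64, 32, 32])
```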
Abstract: This paper investigates joint device activity detection and channel estimation for grant-free random access in low-Earth-orbit (LEO) satellite communications. We consider uplink communications from multiple single-antenna terrestrial users to a LEO satellite equipped with a uniform planar array of multiple antennas, where orthogonal frequency division multiplexing (OFDM) modulation is adopted. To combat the severe Doppler shift, a transmission scheme is proposed in which the discrete prolate spheroidal basis expansion model (DPS-BEM) is introduced to reduce the number of unknown channel parameters. The vector approximate message passing (VAMP) algorithm is then employed to approximate the minimum mean square error estimate of the channel, combined with a Markov random field prior to capture the channel sparsity. Meanwhile, the expectation-maximization (EM) approach is integrated to learn the hyperparameters of the priors. Finally, active devices are detected by calculating the energy of the estimated channels. Simulation results demonstrate that the proposed method outperforms conventional algorithms in terms of activity error rate and channel estimation precision.
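The DPS-BEM step can be illustrated with SciPy's discrete prolate spheroidal sequences: a band-limited, time-varying channel tap over $T$ symbols is approximated with $Q \ll T$ basis coefficients. The sketch below uses a synthetic Doppler spread and block size, not the paper's settings.

```python
# Minimal sketch of the DPS basis expansion idea: approximate a band-limited,
# time-varying channel tap over T symbols with Q << T DPS coefficients.
import numpy as np
from scipy.signal.windows import dpss

T = 256                                  # symbols per transmission block
nu_max = 0.01                            # normalized maximum Doppler shift
Q = int(np.ceil(2 * nu_max * T)) + 1     # rule-of-thumb basis dimension

Psi = dpss(T, T * nu_max, Q).T           # T x Q DPS basis matrix

# Synthetic band-limited channel: a sum of complex exponentials within nu_max.
rng = np.random.default_rng(1)
freqs = rng.uniform(-nu_max, nu_max, 4)
amps = rng.standard_normal(4) + 1j * rng.standard_normal(4)
h = (amps[None, :] * np.exp(2j * np.pi * np.outer(np.arange(T), freqs))).sum(1)

c = np.linalg.lstsq(Psi, h, rcond=None)[0]   # Q unknowns instead of T
h_hat = Psi @ c
print(np.linalg.norm(h - h_hat) / np.linalg.norm(h))  # small approximation error
```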
Abstract: Provisioning secrecy for all users, given the heterogeneity and uncertainty of their channel conditions and locations, and the unknown location of the attacker/eavesdropper, is challenging and not always feasible. This work takes the first step toward guaranteeing secrecy for all users, where a low-resolution intelligent reflecting surface (IRS) is used to enhance legitimate users' reception and thwart a potential eavesdropper (Eve) from intercepting. In real-life scenarios, due to hardware limitations of the IRS's passive reflective elements (PREs), the use of a full-resolution (continuous) phase shift (CPS) is impractical. In this paper, we thus consider a more practical case where the phase shift (PS) is modeled by a low-resolution (quantized) phase shift (QPS), while also addressing the phase shift error (PSE) induced by imperfect channel state information (CSI). To that end, we aim to maximize the minimum secrecy rate (SR) among all users by jointly optimizing the transmitter's beamforming vector and the IRS's PREs under perfect/imperfect/unknown CSI. The resulting optimization problem is non-convex and becomes even more complicated under imperfect/unknown CSI.
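The low-resolution constraint amounts to projecting ideal continuous phases onto a $b$-bit codebook; the short sketch below shows this quantization and the resulting worst-case phase error. The bit width and phase values are illustrative.

```python
# Minimal sketch of low-resolution (quantized) phase shifts: project ideal
# continuous phases onto a b-bit codebook, the hardware constraint modeled here.
import numpy as np

def quantize_phase(theta: np.ndarray, bits: int) -> np.ndarray:
    """Map continuous phases (radians) to the nearest b-bit QPS level."""
    step = 2 * np.pi / 2 ** bits
    return np.round(theta / step) * step

rng = np.random.default_rng(2)
theta_cps = rng.uniform(0, 2 * np.pi, 8)       # ideal continuous phase shifts
theta_qps = quantize_phase(theta_cps, bits=2)  # practical 2-bit PREs
# Worst-case per-element phase error is half a quantization step: pi / 2^bits.
print(np.max(np.abs(np.angle(np.exp(1j * (theta_cps - theta_qps))))))
```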
Abstract: This paper proposes a framework for the robust design of UAV-assisted wireless networks that combines 3D trajectory optimization with user mobility prediction to address dynamic resource allocation challenges. We propose a sparse second-order prediction model for real-time user tracking, coupled with heuristic user clustering to balance service quality and computational complexity. The joint optimization problem is formulated to maximize the minimum rate and is then decomposed into user association, 3D trajectory design, and resource allocation subproblems, which are solved iteratively via successive convex approximation (SCA). Extensive simulations demonstrate: (1) near-optimal performance with $\epsilon \approx 0.67\%$ deviation from upper-bound solutions, (2) $16\%$ higher minimum rates for distant users compared to non-predictive 3D designs, and (3) $10$-$30\%$ faster outage mitigation than time-division benchmarks. The framework's adaptive speed control enables precise mobile-user tracking while maintaining energy efficiency under constrained flight time. Results demonstrate superior robustness in edge-coverage scenarios, making the framework particularly suitable for 5G/6G networks.
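The second-order prediction idea can be sketched with finite differences: estimate a user's velocity and acceleration from the last few observed positions and extrapolate one step ahead. The paper's sparse model adds regularization and structure beyond this minimal version, and the positions below are synthetic.

```python
# Minimal sketch of second-order user-mobility prediction via finite
# differences; the paper's sparse model refines this core idea.
import numpy as np

def predict_position(track: np.ndarray, dt: float) -> np.ndarray:
    """track: (3, 2) array of the last three 2D positions, oldest first."""
    p2, p1, p0 = track[0], track[1], track[2]
    v = (p0 - p1) / dt                    # first-order term (velocity)
    a = (p0 - 2 * p1 + p2) / dt ** 2      # second-order term (acceleration)
    return p0 + v * dt + 0.5 * a * dt ** 2

track = np.array([[0.0, 0.0], [1.0, 0.5], [2.2, 1.2]])  # synthetic positions (m)
print(predict_position(track, dt=1.0))                  # predicted next position
```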
Abstract: This paper studies the problem of hybrid holographic beamforming for sum-rate maximization in a communication system assisted by a reconfigurable holographic surface (RHS). Existing methodologies predominantly rely on gradient-based or approximation techniques that require iterative optimization for each update of the holographic response, which imposes substantial computational overhead. To address these limitations, we establish a mathematical relationship between the mean squared error (MSE) criterion and the holographic response of the RHS to enable alternating optimization based on the minimum MSE (MMSE). Our analysis demonstrates that this relationship exhibits a quadratic dependency on each element of the holographic beamformer. Exploiting this property, we derive closed-form optimal expressions for updating the holographic beamforming weights. Our complexity analysis indicates that the proposed approach exhibits only linear complexity in the RHS size, thus ensuring scalability for large-scale deployments. The presented simulation results validate the effectiveness of our MMSE-based holographic approach and provide useful insights.
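Because the MSE is quadratic in each holographic weight, each update reduces to clipping the vertex of a scalar parabola to the feasible amplitude range, assumed here to be $[0, 1]$. The sketch below shows this closed-form per-element update with synthetic coefficients; in the paper they follow from the MMSE analysis.

```python
# Minimal sketch of the closed-form element-wise update enabled by a quadratic
# MSE dependence: for each holographic weight x_m in [0, 1],
# MSE(x_m) = a_m * x_m^2 + b_m * x_m + const, so the optimum is the clipped vertex.
import numpy as np

def update_weights(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Per-element minimizer of a_m x^2 + b_m x over the amplitude range [0, 1]."""
    return np.clip(-b / (2.0 * a), 0.0, 1.0)

rng = np.random.default_rng(3)
M = 6
a = rng.uniform(0.5, 2.0, M)   # positive curvature (convex in each element)
b = rng.uniform(-3.0, 1.0, M)  # synthetic linear coefficients
x = update_weights(a, b)       # O(M): linear complexity in the RHS size
print(x)
```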
Abstract: Beyond diagonal reconfigurable intelligent surfaces (BD-RIS) have emerged as a transformative technology for enhancing wireless communication by intelligently manipulating the propagation environment. This paper explores the potential of BD-RIS in improving cognitive radio-enabled multilayer non-terrestrial networks (NTNs). It is assumed that a high-altitude platform station (HAPS) has set up the primary network, while an uncrewed aerial vehicle (UAV) establishes the secondary network within the HAPS footprint. We formulate a joint optimization problem to maximize the secrecy rate by optimizing the BD-RIS phase shifts and the secondary transmitter's power allocation while controlling the interference temperature from the secondary network to the primary network. To solve this problem efficiently, we decouple the original problem into two sub-problems, which are solved iteratively via alternating optimization. Simulation results demonstrate the effectiveness of BD-RIS in cognitive radio-enabled multilayer NTNs in accommodating the secondary network while satisfying the constraints imposed by the primary network.
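The decoupling can be pictured as the alternating loop sketched below: fix the phases, cap the secondary transmit power at the interference-temperature limit, update the phases, and iterate. The per-block updates shown (co-phasing and power capping) are simplified stand-ins for the paper's solvers, and all channel values are synthetic.

```python
# Illustrative alternating-optimization skeleton for the two sub-problems;
# the actual phase/power solvers in the paper are more sophisticated.
import numpy as np

rng = np.random.default_rng(4)
M = 8
h_sec = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # secondary link
h_pri = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # leakage to primary
IT_LIMIT, P_MAX = 1.0, 10.0

theta = np.exp(1j * rng.uniform(0, 2 * np.pi, M))  # initial RIS phases
p = P_MAX
for _ in range(20):
    # Block 1: phase update (a simple co-phasing surrogate for the secondary
    # link; the paper jointly optimizes the BD-RIS phase response).
    theta = np.exp(-1j * np.angle(h_sec))
    # Block 2: power update - transmit as much as the IT constraint allows.
    g_pri = np.abs(h_pri @ theta) ** 2
    p = min(P_MAX, IT_LIMIT / g_pri)

rate = np.log2(1 + p * np.abs(h_sec @ theta) ** 2)
print(p, rate)
```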
Abstract: Beyond diagonal reconfigurable intelligent surfaces (BD-RIS) have emerged as a transformative technology for enhancing wireless communication by intelligently manipulating the propagation environment. Their interconnected elements offer enhanced control over signal redirection, making BD-RIS a promising solution for integrated terrestrial and non-terrestrial networks (NTNs). This paper explores the potential of BD-RIS in improving cognitive radio-enabled multilayer non-terrestrial networks. We formulate a joint optimization problem that maximizes the achievable spectral efficiency by optimizing the BD-RIS phase shifts and the secondary transmitter's power allocation while controlling the interference temperature from the secondary network to the primary network. To solve this problem efficiently, we decouple the original problem and propose a novel solution based on an alternating optimization approach. Simulation results demonstrate the effectiveness of BD-RIS in cognitive radio-enabled multilayer NTNs.
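What distinguishes a fully connected BD-RIS from a conventional RIS is the structure of its response matrix: symmetric and unitary rather than diagonal. The sketch below constructs such a matrix via the standard Takagi-style parametrization $\Theta = U\,\mathrm{diag}(e^{j\phi})\,U^T$; the dimensions are illustrative.

```python
# Minimal sketch of a "beyond diagonal" RIS response: a symmetric unitary
# scattering matrix instead of a diagonal phase matrix. A conventional
# (diagonal) RIS is the special case U = I.
import numpy as np

rng = np.random.default_rng(5)
M = 4
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
U, _ = np.linalg.qr(A)                       # random unitary matrix
phi = rng.uniform(0, 2 * np.pi, M)
Theta = U @ np.diag(np.exp(1j * phi)) @ U.T  # symmetric and unitary

print(np.allclose(Theta, Theta.T))                     # True: symmetric
print(np.allclose(Theta.conj().T @ Theta, np.eye(M)))  # True: unitary (lossless)
```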
Abstract: The integration of machine learning (ML) has significantly enhanced the capabilities of Earth Observation (EO) systems by enabling the extraction of actionable insights from complex datasets. However, the performance of data-driven EO applications is heavily influenced by the data collection and transmission processes, where limited satellite bandwidth and latency constraints can hinder the full transmission of the original data to the receivers. To address this issue, adopting the concepts of Semantic Communication (SC) offers a promising solution by prioritizing the transmission of essential data semantics over raw information. Implementing SC for EO systems requires a thorough understanding of how data processing and communication channel conditions affect semantic loss at the processing center. This work proposes a novel data-fitting framework to empirically model the semantic loss using real-world EO datasets and domain-specific insights. The framework quantifies two primary types of semantic loss: (1) source coding loss, assessed via a data quality indicator measuring the impact of processing on the raw source data, and (2) transmission loss, evaluated by comparing practical transmission performance against the Shannon limit. Semantic losses are estimated by evaluating the accuracy of EO applications using four task-oriented ML models, EfficientViT, MobileViT, ResNet50-DINO, and ResNet8-KD, on lossy image datasets under varying channel conditions and compression ratios. These results underpin a framework for efficient semantic-loss modeling in bandwidth-constrained EO scenarios, enabling more reliable and effective operations.
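The data-fitting idea can be illustrated by fitting a parametric accuracy-versus-compression curve to quantify source-coding semantic loss. The logistic form and the data points in the sketch below are synthetic placeholders, not the paper's measured results or chosen model.

```python
# Minimal sketch of the data-fitting idea: fit a parametric curve of task
# accuracy versus compression ratio; form and data are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def acc_model(ratio, a, b, c):
    """Logistic decay of task accuracy as compression becomes more aggressive."""
    return a / (1.0 + np.exp(b * (ratio - c)))

ratios = np.array([1, 5, 10, 20, 40, 80], dtype=float)     # compression ratios
accuracy = np.array([0.97, 0.96, 0.93, 0.85, 0.70, 0.52])  # synthetic accuracies

params, _ = curve_fit(acc_model, ratios, accuracy, p0=[1.0, 0.05, 50.0])
print(params)                      # fitted (a, b, c)
print(acc_model(30.0, *params))    # predicted accuracy at an unseen ratio
```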